Legal Issue


Can LLMs Create Legally Relevant Summaries and Analyses of Videos?

Hoeben-Kuil, Lyra, van Dijck, Gijs, Savelka, Jaromir, Gunawan, Johanna, Kollnig, Konrad, Kolacz, Marta, Duffourc, Mindy, Chakravarthy, Shashank, Westermann, Hannes

arXiv.org Artificial Intelligence

Understanding the legally relevant factual basis of an event and conveying it through text is a key skill of legal professionals. This skill is important for preparing forms (e.g., insurance claims) and other legal documents (e.g., court claims), but often presents a challenge for laypeople. Current AI approaches aim to bridge this gap, but mostly rely on the user to articulate in text what has happened, which may itself be challenging for many. Here, we investigate the capability of large language models (LLMs) to understand and summarize events occurring in videos. We ask an LLM to summarize and draft legal letters based on 120 YouTube videos showing legal issues in various domains. Overall, 71.7% of the summaries were rated as being of high or medium quality, a promising result that opens the door to a number of applications in, e.g., access to justice.


Intelligent Legal Assistant: An Interactive Clarification System for Legal Question Answering

Yao, Rujing, Wu, Yiquan, Zhang, Tong, Zhang, Xuhui, Huang, Yuting, Wu, Yang, Yang, Jiayin, Sun, Changlong, Wang, Fang, Liu, Xiaozhong

arXiv.org Artificial Intelligence

The rise of large language models has opened new avenues for users seeking legal advice. However, users often lack professional legal knowledge, which can lead to questions that omit critical information. This deficiency makes it challenging for traditional legal question-answering systems to accurately identify users' actual needs, often resulting in imprecise or generalized advice. In this work, we develop a legal question-answering system called Intelligent Legal Assistant, which interacts with users to precisely capture their needs. When a user poses a question, the system requests that the user select their geographical location to pinpoint the applicable laws. It then generates clarifying questions and options based on the key information missing from the user's initial question. This allows the user to select and provide the necessary details. Once all necessary information is provided, the system produces an in-depth legal analysis encompassing three aspects: overall conclusion, jurisprudential analysis, and resolution suggestions.


WithdrarXiv: A Large-Scale Dataset for Retraction Study

Rao, Delip, Young, Jonathan, Dietterich, Thomas, Callison-Burch, Chris

arXiv.org Artificial Intelligence

Retractions play a vital role in maintaining scientific integrity, yet systematic studies of retractions in computer science and other STEM fields remain scarce. We present WithdrarXiv, the first large-scale dataset of withdrawn papers from arXiv, containing over 14,000 papers and their associated retraction comments spanning the repository's entire history through September 2024. Through careful analysis of author comments, we develop a comprehensive taxonomy of retraction reasons, identifying 10 distinct categories ranging from critical errors to policy violations. We demonstrate a simple yet highly accurate zero-shot automatic categorization of retraction reasons, achieving a weighted average F1-score of 0.96. Additionally, we release WithdrarXiv-SciFy, an enriched version including scripts for parsed full-text PDFs, specifically designed to enable research in scientific feasibility studies, claim verification, and automated theorem proving. These findings provide valuable insights for improving scientific quality control and automated verification systems. Finally, and most importantly, we discuss ethical issues and take a number of steps to implement responsible data release while fostering open science in this area.
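One plausible shape for the zero-shot categorization the abstract reports is prompting an LLM with the taxonomy and the author's withdrawal comment, then mapping the reply back onto a category. The category list and prompt wording below are illustrative assumptions, not the paper's actual 10-category taxonomy or prompt.

```python
# Illustrative zero-shot prompt builder for categorizing retraction comments.
# The category list is a stand-in; the paper's real taxonomy differs.

CATEGORIES = [
    "critical error in proof or derivation",
    "flawed or unreproducible experiments",
    "duplicate submission",
    "authorship dispute",
    "policy violation",
    "other",
]

def build_prompt(comment: str) -> str:
    """Compose a zero-shot classification prompt from a retraction comment."""
    options = "\n".join(f"- {c}" for c in CATEGORIES)
    return (
        "Classify the reason for the following arXiv withdrawal comment "
        "into exactly one category.\n"
        f"Categories:\n{options}\n\n"
        f"Comment: {comment}\n"
        "Answer with the category name only."
    )

def parse_answer(raw: str) -> str:
    """Map a raw model reply back onto the taxonomy, defaulting to 'other'."""
    reply = raw.strip().lower()
    for c in CATEGORIES:
        if c in reply:
            return c
    return "other"
```

Scoring such a classifier against expert labels with a weighted-average F1, as the paper does, would then be a standard evaluation step.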


A Prompt Engineering Approach and a Knowledge Graph based Framework for Tackling Legal Implications of Large Language Model Answers

Hannah, George, Sousa, Rita T., Dasoulas, Ioannis, d'Amato, Claudia

arXiv.org Artificial Intelligence

With the recent surge in popularity of Large Language Models (LLMs), there is a rising risk of users blindly trusting the information in a response, even in cases where the LLM recommends actions that have potential legal implications, which may put the user in danger. We provide an empirical analysis of multiple existing LLMs, showing the urgency of the problem. Hence, we propose a short-term solution consisting of an approach for isolating these legal issues through prompt re-engineering. We further analyse the outcomes, but also the limitations, of the prompt-engineering-based approach, and we highlight the need for additional resources to fully solve the problem. We also propose a framework powered by a legal knowledge graph (KG) to generate legal citations for these legal issues, enriching the LLM's response.


Misinformation with Legal Consequences (MisLC): A New Task Towards Harnessing Societal Harm of Misinformation

Luo, Chu Fei, Shayanfar, Radin, Bhambhoria, Rohan, Dahan, Samuel, Zhu, Xiaodan

arXiv.org Artificial Intelligence

Misinformation, defined as false or inaccurate information, can result in significant societal harm when it is spread with malicious or even innocuous intent. The rapid exchange of information online necessitates advanced detection mechanisms to mitigate misinformation-induced harm. Existing research, however, has predominantly focused on assessing veracity, overlooking the legal implications and social consequences of misinformation. In this work, we take a novel angle to consolidate the definition of misinformation detection, using legal issues as a measure of societal ramifications and aiming to bring interdisciplinary efforts to bear on misinformation and its consequences. We introduce a new task: Misinformation with Legal Consequence (MisLC), which leverages definitions from a wide range of legal domains covering 4 broader legal topics and 11 fine-grained legal issues, including hate speech, election laws, and privacy regulations. For this task, we advocate a two-step dataset curation approach that utilizes crowd-sourced check-worthiness and expert evaluations of misinformation. We provide insights into the MisLC task through empirical evidence, from the problem definition to experiments and expert involvement. While the latest large language models and retrieval-augmented generation are effective baselines for the task, we find they are still far from replicating expert performance.
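The two-step curation the abstract advocates (crowd-sourced check-worthiness first, then expert evaluation) can be sketched as two successive filters. The field names, label strings, and threshold below are invented for illustration and are not the paper's actual schema.

```python
# Hypothetical two-stage curation filter mirroring the abstract's pipeline:
# keep items the crowd rates check-worthy, then keep those experts confirm.
# Field names, label values, and the 0.5 threshold are illustrative only.

def curate(items: list[dict], threshold: float = 0.5) -> list[dict]:
    """Apply crowd check-worthiness filtering, then expert label filtering."""
    checkworthy = [it for it in items if it["crowd_score"] >= threshold]
    return [it for it in checkworthy if it["expert_label"] == "legal_consequence"]
```

Only items that pass both stages would enter the final MisLC dataset under this reading.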


Legal Aspects of Decentralized and Platform-Driven Economies

Compagnucci, Marcelo Corrales, Kono, Toshiyuki, Teramoto, Shinto

arXiv.org Artificial Intelligence

The sharing economy is sprawling across almost every sector and activity around the world. About a decade ago, only a handful of platform-driven companies operated on the market, Zipcar, BlaBlaCar and Couchsurfing among them. Then Airbnb and Uber revolutionized the transportation and hospitality industries, with a presence in virtually every major city. Access over ownership is the paradigm shift from the traditional business model: it grants individuals the use of products or services without the necessity of buying them. Digital platforms, data- and algorithm-driven companies, as well as decentralized blockchain technologies, have tremendous potential. But they are also changing the rules of the game. One such technology challenging the legal system is AI, whose systems will also reshape the current legal framework concerning the liability of operators, users and manufacturers. This introductory chapter therefore explains and describes the legal issues raised by some of these disruptive technologies. The chapter argues for a more forward-thinking and flexible regulatory structure.


Better Call GPT, Comparing Large Language Models Against Lawyers

Martin, Lauren, Whitehouse, Nick, Yiu, Stephanie, Catterson, Lizzie, Perera, Rivindu

arXiv.org Artificial Intelligence

As of the current state of research, there appears to be a significant gap in exploratory and experimental studies specifically addressing the capabilities of Generative AI and Large Language Models (LLMs) in the determination and discovery of legal issues. Such studies would be instrumental in understanding how these advanced AI technologies manage the intricate task of accurately classifying and pinpointing legal matters, a domain traditionally reliant on the deep, contextual, and specialised knowledge of human legal experts. To address this gap in the research landscape, this study proposes an experimental and exploratory analysis of the performance of LLMs in the legal domain. The research aims to evaluate the capabilities of LLMs by contrasting their performance against human legal practitioners on high-volume, real-world legal tasks. These types of high-volume legal tasks are frequently outsourced or pushed to less experienced lawyers, and, given the rapid advancements made by LLMs, this raises the question of whether LLMs have achieved a level of legal comprehension comparable to the quality, accuracy and efficiency of junior lawyers or outsourced legal practitioners on such tasks.


Top 10 Legal Issues in Artificial Intelligence

#artificialintelligence

It is important to consider the legal implications of artificial intelligence (AI) and its use across industries. Data Privacy and Security: AI systems generate and store large amounts of data, which can contain sensitive personal information; data privacy and security are therefore major concerns in the development and deployment of AI. Liability: Manufacturers, developers, and third-party vendors could all potentially be held liable in the case of an accident or injury caused by an AI system, which raises both legal and ethical concerns.


AI can be a big help to healthcare workers, but there are legal issues to consider

#artificialintelligence

As burnout among healthcare workers continues to be a major concern, the use of artificial intelligence, EHRs and other automation tools may have a positive impact on hospitals and health systems. When it comes to artificial intelligence, however, some legal issues arise. That's why we interviewed Carly Koza, an authority on this topic and a Buchanan Ingersoll & Rooney associate. Buchanan Ingersoll & Rooney is a national law firm with 450 attorneys and government relations professionals across 15 offices, representing companies including 50 of the Fortune 100. Koza discusses what healthcare provider organizations should prepare for as AI implementation grows, how AI can help combat increasing demands on healthcare workers, ways AI can help healthcare provider organizations ensure quality patient care, and the legal matters that arise from these issues.


Researchers develop a new way to see how people feel about artificial intelligence

#artificialintelligence

People in Japan, the U.S. and Germany show different concerns about artificial intelligence (AI) being used in entertainment, in shopping services, or to help find criminals, reports a new study in AI and Ethics. Japanese people tended to report more concern about AI used to fight crime, while Germans and Americans tended to report more concern over the ethical and social aspects of using AI in entertainment, according to the study. "We found there is a difference in the AI and ELSI [ethics, legal, and social issues] levels of understanding between countries. I think it will become important to carry out thorough discussions about the legal and policy issues surrounding AI," said first author and Kanazawa University Associate Professor Yuko Ikkatai. AI is currently used in a wide range of fields, which has elicited both positive and negative attitudes among the general public.